Search for: All records

Creators/Authors contains: "Wang, William"

  1. The study of space-bounded computation has drawn extensively from ideas and results in the field of communication complexity. Catalytic computation (Buhrman, Cleve, Koucký, Loff, and Speelman, STOC 2014) studies the power of bounded space augmented with a pre-filled hard drive that can be used non-destructively during the computation. Many structural questions in this model remain open. Towards a better understanding of catalytic space, we define a model of catalytic communication complexity and prove new upper and lower bounds. In our model, Alice and Bob share a blackboard with a tiny number of free bits and a larger section with an arbitrary initial configuration. They must jointly compute a function of their inputs, communicating only via the blackboard, and must always reset the blackboard to its initial configuration. We prove several upper and lower bounds: 1) We characterize the simplest nontrivial model, that of one bit of free space and three rounds, in terms of 𝔽₂ rank. In particular, we give natural problems that are solvable with a minimal-sized blackboard yet require near-maximal (randomized) communication complexity, and vice versa. 2) We show that allowing constantly many free bits, as opposed to one, allows an exponential improvement in the size of the blackboard for natural problems. To do so, we connect the problem to existence questions in extremal graph theory. 3) We give tight connections between our model and standard notions of non-uniform catalytic computation. Using this connection, we show that with an arbitrary constant number of rounds and bits of free space, one can compute all functions in TC⁰. We view this model as a step toward understanding the value of filled space in computation. (An illustrative 𝔽₂-rank computation on a toy communication matrix is sketched after this entry.)
    Free, publicly-accessible full text available January 7, 2026
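    The 𝔽₂-rank connection in item 1) can be made concrete: for a two-party function f(x, y), the relevant object is the communication matrix M[x][y] = f(x, y) viewed over GF(2). The following minimal Python sketch (not from the paper; ip2 and f2_rank are illustrative names) computes that rank for the inner-product-mod-2 function using the standard XOR-basis elimination.

    # Illustrative sketch (not the paper's code): F2-rank of a communication
    # matrix, computed with the standard XOR-basis algorithm over GF(2).
    def ip2(x: int, y: int) -> int:
        """Inner product mod 2 of the bit vectors encoded by x and y."""
        return bin(x & y).count("1") & 1

    def f2_rank(rows: list[int]) -> int:
        """Rank over GF(2) of a 0/1 matrix; each row is packed into an int."""
        basis = {}  # leading-bit position -> basis row with that leading bit
        for row in rows:
            while row:
                lead = row.bit_length() - 1
                if lead not in basis:
                    basis[lead] = row
                    break
                row ^= basis[lead]  # clear the leading bit and keep reducing
        return len(basis)

    n = 3                   # input length in bits
    size = 1 << n
    matrix = [sum(ip2(x, y) << y for y in range(size)) for x in range(size)]
    print(f2_rank(matrix))  # the IP2 matrix has F2-rank n, so this prints 3

    Since M[x][y] = Σᵢ xᵢyᵢ over 𝔽₂ is a sum of n rank-one matrices, the printed rank is exactly n.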
  2. Counterfactual explanations of Graph Neural Networks (GNNs) offer a powerful way to understand data that can naturally be represented by a graph structure. Furthermore, in many domains it is highly desirable to derive data-driven global explanations or rules that can better explain the high-level properties of the models and data in question. However, evaluating global counterfactual explanations is hard in real-world datasets due to a lack of human-annotated ground truth, which limits their use in areas like the molecular sciences. Additionally, the increasing scale of these datasets poses a challenge for random-search-based methods. In this paper, we develop RLHEX, a novel global explanation model for molecular property prediction. It aligns the counterfactual explanations with human-defined principles, making the explanations more interpretable and easier for experts to evaluate. RLHEX includes a VAE-based graph generator that produces global explanations and an adapter that adjusts the latent representation space to human-defined principles. Optimized by Proximal Policy Optimization (PPO), the global explanations produced by RLHEX cover 4.12% more input graphs and reduce the distance between the counterfactual explanation set and the input set by 0.47% on average across three molecular datasets. RLHEX provides a flexible framework for incorporating different human-designed principles into the counterfactual explanation generation process, aligning these explanations with domain expertise. The code and data are released at https://github.com/dqwang122/RLHEX. (A toy generate-score-update loop in the spirit of this pipeline is sketched after this entry.)
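    At a high level, the abstract describes a generate-score-update loop: a generator proposes candidate explanations, a reward combines coverage of the input set with human-defined principles, and a policy-gradient method adjusts the generator. The toy Python sketch below illustrates only that loop; a bit vector stands in for a molecular graph, plain REINFORCE stands in for PPO, and every name (decode, reward, INPUTS) is a hypothetical stand-in, not the API of the released code.

    import math, random

    random.seed(0)
    INPUTS = [[1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 0]]  # toy "input graphs"

    def decode(theta):
        """Hypothetical decoder stand-in: sample a binary vector from the policy."""
        return [1 if random.random() < 1 / (1 + math.exp(-t)) else 0 for t in theta]

    def reward(g):
        """Coverage of the input set minus a toy 'principle' penalty (sparsity)."""
        covered = sum(sum(a != b for a, b in zip(g, x)) <= 1 for x in INPUTS)
        return covered / len(INPUTS) - 0.1 * sum(g)

    theta = [0.0] * 4                # policy logits for 4 independent toy bits
    for _ in range(2000):
        g = decode(theta)
        r = reward(g)
        for i, gi in enumerate(g):   # REINFORCE: grad of log-prob is g_i - p_i
            p = 1 / (1 + math.exp(-theta[i]))
            theta[i] += 0.1 * r * (gi - p)
    print(decode(theta))             # should settle near [1, 1, 1, 0], covering all inputs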
  3. Optical applications of plasmonic nanoparticles depend critically on particle properties such as relative proximity, composition, crystallinity, and shape. The most common nanoparticle fabrication techniques, colloidal synthesis and electron beam lithography, allow the tailoring of some of these parameters, yet do not provide control over all of them. Scanning probe block copolymer lithography (SPBCL), a technique that grows nanoparticles on substrates from precisely deposited precursor droplets, merges the advantages of colloidal synthesis and electron beam lithography, and offers high throughput, precise particle positioning, and composition control. A few challenges with the SPBCL method remain: fabrication of optically relevant particle sizes on optically transparent supports, and detailed correlation of their optical and morphological properties. Here, we adapt SPBCL to fabricate large arrays of gold nanoparticles on glass supports. The resulting nanoparticles have varying shapes, and at ∼100 nm in diameter, they support strong plasmon resonances. To fully exploit the high-throughput fabrication method, we designed an automated dark-field microscope and correlated the optical behavior to the mechanical properties as determined through electron and pump–probe microscopy. We find that the SPBCL-synthesized nanoparticles are highly crystalline, supporting both plasmon oscillations and mechanical vibrations with lifetimes comparable to colloidal nanospheres. Our work highlights SPBCL as a promising and versatile synthesis approach for plasmonic nanoparticles, leading the way toward extensive screening capabilities for optical properties and hence improved potential applications.
  4. Krause, Andreas; Brunskill, Emma; Cho, Kyunghyun; Engelhardt, Barbara; Sabato, Sivan; Scarlett, Jonathan (Eds.)
    Behavior-constrained policy optimization has been demonstrated to be a successful paradigm for tackling offline reinforcement learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while constrained by the behavior policy to avoid significant distributional shift. In this paper, we propose closed-form policy improvement operators. We make the novel observation that the behavior constraint naturally motivates a first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policy as a Gaussian mixture and overcome the induced optimization difficulties by leveraging the LogSumExp lower bound and Jensen's inequality, giving rise to a closed-form policy improvement operator. We instantiate both one-step and iterative offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark. Our code is available at https://cfpi-icml23.github.io/. (The single-Gaussian case of the closed-form step is sketched after this entry.)
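    For intuition on the closed-form operator, consider the single-Gaussian case: a first-order Taylor expansion of the learned critic Q(s, ·) around the behavior mean turns the constrained maximization into one scaled gradient step from that mean. The Python sketch below illustrates that reading only; the quadratic critic, the finite-difference gradient, and the step size tau are assumptions made for the toy, not the paper's implementation.

    import numpy as np

    def q(a):
        """Toy learned critic: a quadratic peaking at a = [1, -1]."""
        return -np.sum((a - np.array([1.0, -1.0])) ** 2)

    def grad_q(a, eps=1e-5):
        """Finite-difference gradient of the critic (a stand-in for autograd)."""
        g = np.zeros_like(a)
        for i in range(a.size):
            d = np.zeros_like(a)
            d[i] = eps
            g[i] = (q(a + d) - q(a - d)) / (2 * eps)
        return g

    mu_beta = np.array([0.0, 0.0])   # behavior-policy mean at this state
    sigma2 = np.array([0.5, 0.5])    # behavior-policy (diagonal) variance
    tau = 0.5                        # trust-region step size (a free choice here)
    a_new = mu_beta + tau * sigma2 * grad_q(mu_beta)
    print(a_new)                     # one closed-form step toward higher Q, staying near mu_beta

    The Gaussian-mixture case in the paper replaces this single step with a closed-form choice among mixture components via the LogSumExp lower bound; the toy above shows only the one-Gaussian core.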